Gradient-free distributed optimization with exact convergence
Abstract
In this paper, a gradient-free distributed algorithm is introduced to solve a set-constrained optimization problem under a directed communication network. Specifically, at each time-step the agents locally compute a so-called pseudo-gradient to guide the updates of the decision variables, which makes the method applicable in fields where gradient information is unknown, unavailable, or non-existent. A surplus-based method is adopted to remove the doubly stochastic requirement on the weighting matrix, which enables implementation on graphs having no associated doubly stochastic matrix. Regarding convergence, the proposed algorithm is able to obtain the exact optimal value with any positive, non-summable, and non-increasing step-sizes. Furthermore, when the step-size is also square-summable, the algorithm is guaranteed to achieve an optimal solution. In addition to the standard convergence analysis, the convergence rate is investigated. Finally, the effectiveness of the proposed algorithm is verified through numerical simulations.
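The abstract gives no pseudocode, so the following Python sketch only illustrates the two ingredients it mentions, under stated assumptions: a two-point pseudo-gradient built from function values alone, and a consensus-plus-descent update with a positive, non-increasing, non-summable step-size. It is not the authors' algorithm; in particular it uses an undirected ring with a doubly stochastic weight matrix, whereas the paper's surplus-based correction removes exactly that requirement on directed graphs. The test problem and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test problem: n agents, each with a private quadratic cost
# f_i(x) = 0.5 * (x - c_i)^2; the minimizer of the sum is the mean of the c_i.
n = 5
c = rng.normal(size=n)
f = [lambda x, ci=ci: 0.5 * (x - ci) ** 2 for ci in c]

def pseudo_gradient(fi, x, delta):
    """Two-point (zeroth-order) estimate of f_i'(x) using only function values."""
    u = rng.choice([-1.0, 1.0])             # random perturbation direction
    return u * (fi(x + delta * u) - fi(x - delta * u)) / (2.0 * delta)

# Doubly stochastic weights on an undirected ring (an assumption of this sketch,
# not of the paper, whose surplus-based method avoids this requirement).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                              # one scalar decision variable per agent
for k in range(1, 2001):
    alpha = 1.0 / np.sqrt(k)                 # positive, non-increasing, non-summable
    delta = 1.0 / k                          # shrinking smoothing radius
    mixed = W @ x                            # consensus (mixing) step
    g = np.array([pseudo_gradient(f[i], mixed[i], delta) for i in range(n)])
    x = mixed - alpha * g                    # local pseudo-gradient descent

print("agent estimates:", np.round(x, 3))
print("true optimum   :", round(c.mean(), 3))
```

With these choices the agents' estimates drift toward the minimizer of the sum of the local costs, which for this toy problem is simply the mean of the c_i.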
Similar resources
Distributed Gradient Optimization with Embodied Approximation
We present an informal description of a general approach for developing decentralized distributed gradient descent optimization algorithms for teams of embodied agents that need to rearrange their configuration over space and/or time, into some optimal and initially unknown configuration. Our approach relies on using embodiment and spatial embeddedness as a surrogate for computational resources...
Convergence Analysis of Distributed Stochastic Gradient Descent with Shuffling
When using stochastic gradient descent (SGD) to solve large-scale machine learning problems, a common practice of data processing is to shuffle the training data, partition the data across multiple threads/machines if needed, and then perform several epochs of training on the re-shuffled (either locally or globally) data. The above procedure makes the instances used to compute the gradients no ...
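As a concrete, purely hypothetical illustration of the pipeline this excerpt describes, the sketch below shuffles a toy least-squares dataset each epoch, partitions the permuted indices across a few simulated workers, lets each worker take one local pass of per-sample SGD over its shard, and then averages the worker models. The problem, sizes, and learning rate are all assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical least-squares problem: samples (a_i, b_i), model w,
# per-sample loss 0.5 * (a_i . w - b_i)^2.
N, d, workers = 1200, 10, 4
A = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
b = A @ w_true + 0.01 * rng.normal(size=N)

w = np.zeros(d)
lr = 0.05
for epoch in range(30):
    perm = rng.permutation(N)                 # re-shuffle the training data
    shards = np.array_split(perm, workers)    # partition across workers/threads
    local_models = []
    for shard in shards:                      # serial stand-in for parallel workers
        w_local = w.copy()
        for i in shard:                       # one local pass of per-sample SGD
            grad_i = (A[i] @ w_local - b[i]) * A[i]
            w_local -= lr * grad_i
        local_models.append(w_local)
    w = np.mean(local_models, axis=0)         # synchronize by averaging

print("parameter error:", round(float(np.linalg.norm(w - w_true)), 4))
```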
Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization
In many modern machine learning applications, structures of underlying mathematical models often yield nonconvex optimization problems. Due to the intractability of nonconvexity, there is a rising need to develop efficient methods for solving general nonconvex problems with certain performance guarantee. In this work, we investigate the accelerated proximal gradient method for nonconvex program...
Gradient algorithms for quadratic optimization with fast convergence rates
We propose a family of gradient algorithms for minimizing a quadratic function f(x) = (Ax, x)/2 − (x, y) in R or a Hilbert space, with simple rules for choosing the step-size at each iteration. We show that when the step-sizes are generated by a dynamical system with ergodic distribution having the arcsine density on a subinterval of the spectrum of A, the asymptotic rate of convergence of the a...
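To make the setup concrete, here is a toy sketch (not the authors' scheme): gradient descent on a quadratic f(x) = (Ax, x)/2 − (x, y), with step-sizes 1/z_k where each z_k is drawn with the arcsine density on the spectral interval [m, M] of A. Sampling i.i.d. rather than from a dynamical system with arcsine ergodic distribution is a simplification, and the matrix, spectrum, and iteration count are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical quadratic f(x) = (Ax, x)/2 - (x, y) with A symmetric positive
# definite; the minimizer solves A x = y.
d = 20
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
eigs = np.linspace(1.0, 10.0, d)           # spectrum of A: [m, M] = [1, 10]
A = Q @ np.diag(eigs) @ Q.T
y = rng.normal(size=d)
x_star = np.linalg.solve(A, y)

m, M = eigs[0], eigs[-1]

def arcsine_sample(lo, hi):
    """Draw z with the arcsine density on [lo, hi] (i.i.d. here, as a simplification)."""
    u = rng.uniform()
    return lo + (hi - lo) * np.sin(0.5 * np.pi * u) ** 2

x = np.zeros(d)
for k in range(200):
    g = A @ x - y                          # gradient of the quadratic
    z = arcsine_sample(m, M)               # point in the spectral interval of A
    x = x - g / z                          # step-size 1/z

print("distance to minimizer:", float(np.linalg.norm(x - x_star)))
```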
Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization
A. Proof of Theorem 1. We first recall the following lemma. Lemma 1 (Lemma 1, (Gong et al., 2013)). Under Assumption 1.{3}, for any η > 0 and any x, y ∈ R such that x = prox_{ηg}(y − η∇f(y)), one has that F(x) ≤ F(y) − (1/(2η) − L/2)‖x − y‖². Applying Lemma 1 with x = x_k, y = y_k, we obtain that F(x_k) ≤ F(y_k) − (1/(2η) − L/2)‖x_k − y_k‖². (12) Since η < 1/L, it follows that F(x_k) ≤ F(y_k). Moreover, ...
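The inequality in this excerpt is the standard sufficient-decrease bound for a proximal gradient step. Below is a small, self-contained Python sketch on a hypothetical ℓ1-regularized least-squares instance (not the paper's setting): proximal gradient with a momentum (extrapolation) point y, the step x = prox_{ηg}(y − η∇f(y)) with η < 1/L, and a runtime check of the lemma's bound F(x) ≤ F(y) − (1/(2η) − L/2)‖x − y‖².

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical composite problem F(x) = f(x) + g(x) with
#   f(x) = 0.5 * ||B x - c||^2   (smooth; L = largest eigenvalue of B^T B)
#   g(x) = lam * ||x||_1         (nonsmooth; prox = soft-thresholding)
d = 50
B = rng.normal(size=(80, d))
c = rng.normal(size=80)
lam = 0.1
L = np.linalg.norm(B, 2) ** 2              # Lipschitz constant of grad f
eta = 0.9 / L                              # step-size eta < 1/L, as in the lemma

def grad_f(x):
    return B.T @ (B @ x - c)

def prox_g(v, t):
    """Proximal operator of t * lam * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

def F(x):
    return 0.5 * np.linalg.norm(B @ x - c) ** 2 + lam * np.abs(x).sum()

x_prev = x = np.zeros(d)
for k in range(1, 301):
    beta = (k - 1.0) / (k + 2.0)           # a common momentum weight (illustrative)
    y = x + beta * (x - x_prev)            # momentum (extrapolation) point
    x_prev = x
    x = prox_g(y - eta * grad_f(y), eta)   # x = prox_{eta g}(y - eta * grad f(y))
    # Sufficient decrease from the lemma: F(x) <= F(y) - (1/(2*eta) - L/2) ||x - y||^2
    assert F(x) <= F(y) - (0.5 / eta - 0.5 * L) * np.linalg.norm(x - y) ** 2 + 1e-9

print("final objective F(x):", round(float(F(x)), 4))
```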
Journal
Journal title: Automatica
Year: 2022
ISSN: 1873-2836, 0005-1098
DOI: https://doi.org/10.1016/j.automatica.2022.110474